Artificial Intelligence Nanodegree

Convolutional Neural Networks

Project: Write an Algorithm for a Dog Identification App


In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this IPython notebook.


Why We're Here

In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

Sample Dog Output

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!

The Road Ahead

We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.

  • Step 0: Import Datasets
  • Step 1: Detect Humans
  • Step 2: Detect Dogs
  • Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
  • Step 4: Use a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 6: Write your Algorithm
  • Step 7: Test Your Algorithm

Step 0: Import Datasets

Import Dog Dataset

In the code cell below, we import a dataset of dog images. We populate a few variables through the use of the load_files function from the scikit-learn library:

  • train_files, valid_files, test_files - numpy arrays containing file paths to images
  • train_targets, valid_targets, test_targets - numpy arrays containing onehot-encoded classification labels
  • dog_names - list of string-valued dog breed names for translating labels
In [2]:
from sklearn.datasets import load_files       
from keras.utils import np_utils
import numpy as np
from glob import glob

# define function to load train, test, and validation datasets
def load_dataset(path):
    data = load_files(path)
    dog_files = np.array(data['filenames'])
    dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
    return dog_files, dog_targets

# load train, test, and validation datasets
train_files, train_targets = load_dataset('dogImages/train')
valid_files, valid_targets = load_dataset('dogImages/valid')
test_files, test_targets = load_dataset('dogImages/test')

# load list of dog names
dog_names = [item[20:-1] for item in sorted(glob("dogImages/train/*/"))]

# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
Using TensorFlow backend.
There are 133 total dog categories.
There are 8351 total dog images.

There are 6680 training dog images.
There are 835 validation dog images.
There are 836 test dog images.
In [3]:
print(train_targets[0])
[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  0.  0.  0.]

Import Human Dataset

In the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array human_files.

In [4]:
import random
random.seed(8675309)

# load filenames in shuffled human dataset
human_files = np.array(glob("lfw/*/*"))
random.shuffle(human_files)

# print statistics about the dataset
print('There are %d total human images.' % len(human_files))
There are 13233 total human images.

Step 1: Detect Humans

We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory.

In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.

In [5]:
import cv2                
import matplotlib.pyplot as plt                        
%matplotlib inline                               

# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

# load color (BGR) image
img = cv2.imread(human_files[3])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find faces in image
faces = face_cascade.detectMultiScale(gray)

# print number of faces detected in the image
print('Number of faces detected:', len(faces))

# get bounding box for each detected face
for (x,y,w,h) in faces:
    # add bounding box to color image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected: 3

Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.

Write a Human Face Detector

We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.

In [6]:
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0

(IMPLEMENTATION) Assess the Human Face Detector

Question 1: Use the code cell below to test the performance of the face_detector function.

  • What percentage of the first 100 images in human_files have a detected human face?
  • What percentage of the first 100 images in dog_files have a detected human face?

Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.

Answer:

  • What percentage of the first 100 images in human_files have a detected human face?
    99% of the first images (99/100) in human files have detected a human face.

  • What percentage of the first 100 images in dog_files have a detected human face?
    11% of the first images (11/100) in dog files have detected a human face.

Question 2: This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unneccessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?

Answer: We can help it to detect human in images that does not necessitate an image with a clearly presented face by adding augmented data. Why? Because augmented data will help our algorithm to be translation and zooming invarient.

We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on each of the datasets.

In [6]:
human_files_short = human_files[:100]
dog_files_short = train_files[:100]

# Do NOT modify the code above this line.

## TODO: Test the performance of the face_detector algorithm 
## on the images in human_files_short and dog_files_short.
def calculate_accuracy_human(human_files_short):
    accuracy = 0
    total_accuracy = 0
    for e, r in enumerate(human_files_short):
        result = face_detector(human_files_short[e])
        if result == True:
            accuracy += 1
            total_accuracy = accuracy/100
    return total_accuracy

def calculate_accuracy_dog(dog_files_short):
    # In this case if it's True it means that our algorithm found that the dog face is an human face
    accuracy = 0
    total_accuracy = 0
    for e, r in enumerate(dog_files_short):
        result = face_detector(dog_files_short[e])
        if result == True:
            accuracy += 1
            total_accuracy = accuracy/100
    return total_accuracy
    
    
human_accuracy = calculate_accuracy_human(human_files_short)
print("The human face detector accuracy is :", human_accuracy, "% of the pictures")

dog_accuracy = calculate_accuracy_dog(dog_files_short)
print("The dog face detector detected human face in dog images in ", dog_accuracy, "% of the pictures")
The human face detector accuracy is : 0.99 % of the pictures
The dog face detector detected human face in dog images in  0.11 % of the pictures

(IMPLEMENTATION) OPTIONAL PART Report the performance of another face detection algorithm on the LFW dataset

Preprocessing

In [7]:
# Get the data
human_train = human_files[:10585] #80%
human_test = human_files[10586:11910] #10%
human_valid = human_files[11911:] #10%

human_label = 1
dog_label = 0


print("There are ", len(human_train), " pictures in human_train")
print("There are ", len(human_test), " pictures in human_test")
print("There are ", len(human_valid), " pictures in human_valid")

# Preprocessing function
def preprocessing(imagePath):
    # Read the image
    # load color (BGR) image
    image = cv2.imread(imagePath)
    
    # Convert BGR image to RGB
    color = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    
    # Resize the image
    final_image = cv2.resize(color, (224,224))
    
    return final_image

# When we have this data, we want to preprocessing it (convert to RGB + resize it) to be usable for our NN.

# TRAIN
human_train_X = [preprocessing(image) for image in human_train]
# [Human, Dog]
human_train_Y = np.array([[1., 0.] for _ in range(len(human_train_X))])

dog_train_X = [preprocessing(image) for image in train_files]
dog_train_Y = np.array([[0., 1.] for _ in range(len(train_files))])

X_train = np.concatenate((human_train_X, dog_train_X), axis=0)
Y_train = np.concatenate((human_train_Y, dog_train_Y), axis=0)

# TEST
human_test_X = [preprocessing(image) for image in human_test]
human_test_Y = np.array([[1., 0.] for _ in range(len(human_test_X))])

dog_test_X = [preprocessing(image) for image in test_files]
dog_test_Y = np.array([[0., 1.] for _ in range(len(test_files))])

X_test = np.concatenate((human_test_X, dog_test_X), axis=0)
Y_test = np.concatenate((human_test_Y, dog_test_Y), axis=0)

# VALIDATION
human_valid_X = [preprocessing(image) for image in human_valid]
human_valid_Y = np.array([[1., 0.] for _ in range(len(human_valid_X))])

dog_valid_X = [preprocessing(image) for image in valid_files]
dog_valid_Y = np.array([[0., 1.] for _ in range(len(valid_files))])

X_valid = np.concatenate((human_valid_X, dog_valid_X), axis=0)
Y_valid = np.concatenate((human_valid_Y, dog_valid_Y), axis=0)
There are  10585  pictures in human_train
There are  1324  pictures in human_test
There are  1322  pictures in human_valid

Model

In [8]:
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.layers.advanced_activations import ELU
from keras.layers.normalization import BatchNormalization

human_model = Sequential()

human_model.add(Conv2D(filters=16,
                kernel_size=3,
                strides=1,
                padding="same",
                input_shape=(224, 224, 3)))

human_model.add(BatchNormalization())

human_model.add(ELU(alpha=1.0))

human_model.add(MaxPooling2D(pool_size=2))

human_model.add(Conv2D(filters=32,
                kernel_size=5,
                strides=1,
                padding="same"))

human_model.add(BatchNormalization())

human_model.add(ELU(alpha=1.0))

human_model.add(MaxPooling2D(pool_size=2))

human_model.add(Conv2D(filters=64,
                kernel_size=5,
                strides=1,
                padding="same"))

human_model.add(BatchNormalization())

human_model.add(ELU(alpha=1.0))

human_model.add(Dropout(0.2))

human_model.add(Flatten())

human_model.add(Dense(128, activation="relu"))

human_model.add(Dropout(0.3))

human_model.add(Dense(64, activation="relu"))

human_model.add(Dense(2, activation="softmax"))

human_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 224, 224, 16)      448       
_________________________________________________________________
batch_normalization_1 (Batch (None, 224, 224, 16)      64        
_________________________________________________________________
elu_1 (ELU)                  (None, 224, 224, 16)      0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 112, 112, 16)      0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 112, 112, 32)      12832     
_________________________________________________________________
batch_normalization_2 (Batch (None, 112, 112, 32)      128       
_________________________________________________________________
elu_2 (ELU)                  (None, 112, 112, 32)      0         
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 56, 56, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 56, 56, 64)        51264     
_________________________________________________________________
batch_normalization_3 (Batch (None, 56, 56, 64)        256       
_________________________________________________________________
elu_3 (ELU)                  (None, 56, 56, 64)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 56, 56, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 200704)            0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               25690240  
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 64)                8256      
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 130       
=================================================================
Total params: 25,763,618
Trainable params: 25,763,394
Non-trainable params: 224
_________________________________________________________________

Training

In [ ]:
from time import time
from keras.callbacks import ModelCheckpoint  
from keras.preprocessing.image import ImageDataGenerator

human_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

start = time()

## Here we'll do data augmentation
## Data augmentation for training
"""train_augmentation = ImageDataGenerator(
                rotation_range=40,
                width_shift_range=0.2,
                height_shift_range=0.2,
                shear_range=0.2,
                zoom_range=0.2, 
                horizontal_flip = True)

train_augmentation.fit(X_train)


valid_augmentation = ImageDataGenerator(
                rescale=1. /255,
                width_shift_range=0.2,
                height_shift_range=0.2,
                rotation_range=20,
                horizontal_flip = True)

valid_augmentation.fit(X_valid)

human_model.fit_generator(train_augmentation.flow(X_train, Y_train, batch_size=20),
           steps_per_epoch=train_tensors.shape[0] // batch_size,
            epochs=epochs,
            verbose=1,
            callbacks=[checkpointer],
            validation_data=(X_valid, Y_valid)
            )    
"""

checkpointer = ModelCheckpoint(filepath='saved_models/human_detector.hdf5', 
                               verbose=1, save_best_only=True)

batch_size = 32
epochs = 10

human_model.fit(X_train, 
                Y_train, 
                batch_size=batch_size, 
                epochs=epochs,
                validation_data = (X_valid, Y_valid),
                callbacks=[checkpointer],
                verbose=1,
                shuffle=True
               )
    
end = time()
total_time = end - start
print("The total computation time is {} ".format(total_time/60), " minutes") 

Because I've already trained this CNN but it takes about 4 hours, and unfortunately I've click on Restart and click output. Consequently I will not retrain it I'll just load its weights

Load weights and testing

In [9]:
human_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
human_model.load_weights('saved_models/human_detector.hdf5')

prediction_score = human_model.evaluate(X_test, Y_test, verbose=0)
accuracy_human_model = prediction_score[1]*100
print('Accuracy: ', accuracy_human_model, "%")
Accuracy:  95.0 %
In [10]:
predictions_nn = human_model.predict_classes(X_test)

size_human_test = len(human_test_Y)
size_dog_test = len(dog_test_Y)

human_detected_as_human = sum([predictions_nn[e] == 0 for e in range(size_human_test)])
# Calculate the total accuracy
human_detected_as_human_accuracy = (human_detected_as_human / size_human_test)*100

dog_detected_as_dog = sum([predictions_nn[e] == 1 for e in range(size_dog_test, size_human_test + size_dog_test)])
dog_detected_as_dog_accuracy = (dog_detected_as_dog / size_dog_test)*100

print("The human face detector accuracy is :", human_detected_as_human_accuracy, "% of the pictures")
print("The dog face detector accuracy is :", dog_detected_as_dog_accuracy, "% of the pictures")
2160/2160 [==============================] - 155s   
The human face detector accuracy is : 99.0181268882 % of the pictures
The dog face detector accuracy is : 89.1148325359 % of the pictures
In [10]:
def face_detector_CNN(img_path):
    
    # Preprocess
    image = preprocessing(img_path)
    
    image = np.expand_dims(image, axis=0)
    
    return human_model.predict(image)[0][0]

Step 2: Detect Dogs

In this section, we use a pre-trained ResNet-50 model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories. Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.

In [11]:
from keras.applications.resnet50 import ResNet50

# define ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')

Pre-process the Data

When using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape

$$ (\text{nb_samples}, \text{rows}, \text{columns}, \text{channels}), $$

where nb_samples corresponds to the total number of images (or samples), and rows, columns, and channels correspond to the number of rows, columns, and channels for each image, respectively.

The path_to_tensor function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape

$$ (1, 224, 224, 3). $$

The paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape

$$ (\text{nb_samples}, 224, 224, 3). $$

Here, nb_samples is the number of samples, or number of images, in the supplied array of image paths. It is best to think of nb_samples as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!

In [12]:
from keras.preprocessing import image                  
from tqdm import tqdm

def path_to_tensor(img_path):
    # loads RGB image as PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    
    # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
    return np.expand_dims(x, axis=0)


def paths_to_tensor(img_paths):
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)

Making Predictions with ResNet-50

Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in RGB as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function preprocess_input. If you're curious, you can check the code for preprocess_input here.

Now that we have a way to format our image for supplying to ResNet-50, we are now ready to use the model to extract the predictions. This is accomplished with the predict method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the ResNet50_predict_labels function below.

By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this dictionary.

In [13]:
from keras.applications.resnet50 import preprocess_input, decode_predictions

def ResNet50_predict_labels(img_path):
    # returns prediction vector for image located at img_path
    img = preprocess_input(path_to_tensor(img_path))
    return np.argmax(ResNet50_model.predict(img))

Write a Dog Detector

While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the ResNet50_predict_labels function above returns a value between 151 and 268 (inclusive).

We use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).

In [14]:
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    prediction = ResNet50_predict_labels(img_path)
    return ((prediction <= 268) & (prediction >= 151)) 

(IMPLEMENTATION) Assess the Dog Detector

Question 3: Use the code cell below to test the performance of your dog_detector function.

  • What percentage of the images in human_files_short have a detected dog?

  • What percentage of the images in dog_files_short have a detected dog? 100%

Answer:
Resnet found dog faces in human pictures in 1% of them
Resnet found dog faces in dog pictures in 100% of them

In [11]:
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

def visualize_img(img_path, ax):
    img = cv2.imread(img_path)
    ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    
fig = plt.figure(figsize=(20, 10))
for i in range(12):
    ax = fig.add_subplot(3, 4, i + 1, xticks=[], yticks=[])
    visualize_img(train_files[i], ax)
In [16]:
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.

def calculate_accuracy_human_resnet(human_files_short):
    accuracy = 0
    total_accuracy = 0
    for e, r in enumerate(human_files_short):
        result = dog_detector(human_files_short[e])
        if result == True:
            accuracy += 1
            total_accuracy = (accuracy/100)*100
    return total_accuracy

def calculate_accuracy_dog_resnet(dog_files_short):
    # In this case if it's True it means that our algorithm found that the dog face is an human face
    accuracy = 0
    total_accuracy = 0
    for e, r in enumerate(dog_files_short):
        result = dog_detector(dog_files_short[e])
        if result == True:
            accuracy += 1
            total_accuracy = (accuracy/100)*100
    return total_accuracy
    
    
human_accuracy = calculate_accuracy_human_resnet(human_files_short)
print("Resnet found dog faces in human pictures in :", human_accuracy, " % of the human pictures")

dog_accuracy = calculate_accuracy_dog_resnet(dog_files_short)
print("Resnet found dog faces in dog pictures in ", dog_accuracy, " % of the pictures")
Resnet found dog faces in human pictures in : 0  % of the human pictures
Resnet found dog faces in dog pictures in  100.0  % of the pictures

Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.

Be careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train.

We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel.

Brittany Welsh Springer Spaniel

It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

Curly-Coated Retriever American Water Spaniel

Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.

Yellow Labrador Chocolate Labrador Black Labrador

We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imabalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.

Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!

Pre-process the Data

We rescale the images by dividing every pixel in every image by 255.

In [15]:
from PIL import ImageFile                            
ImageFile.LOAD_TRUNCATED_IMAGES = True
from time import time

# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
100%|██████████| 6680/6680 [01:00<00:00, 110.55it/s]
100%|██████████| 835/835 [00:06<00:00, 122.10it/s]
100%|██████████| 836/836 [00:06<00:00, 123.56it/s]

(IMPLEMENTATION) Model Architecture First attempt

Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:

    model.summary()

We have imported some Python modules to get you started, but feel free to import as many modules as you need. If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs:

Sample CNN

Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.

Answer:

The strategies

Before speaking about the architecture we must speak about some strategies used to improve the performance of our classifier: We used 4 strategies:

  • Data Augmentation for rotation and translation invariance
  • Using ELU instead of ReLU as the activation function
  • Using Batch Normalization to reduce internal covariate shift in neural networks
  • Using dropout to avoid overfitting
  • Data Augmentation for rotation and translation invariance

    As explained in this article Building powerful image classification models using very little data , because we have little data and we want that CNN to be rotation and translation invariance, we need to do data augmentation feed our data folder with new transformed data (rotated, translated, zoomed...).

    Using ELU instead of ReLU as activation function

    Instead of using the traditional ReLU activation function, I decided to use ELU thanks to this article Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) that explains that ELUs have better results for CNN than ReLU.

    Using Batch Normalization to reduce internal covariate shift

    We used Batch Normalization to reduce internal covariate shift in our CNN as explained in this article It reduces the training time significantly.

    Using Dropout to avoid overfitting

    We used Dropout in order to avoid overfitting (because if some neurons are turned off, the neural network will not rely strongly on neurons).

    The architecture

    We use a classical image classification architecture with 3 Conv2D with their number of filters doubled each time. Followed by a MaxPooling each time to downsample the convolutional output. Finally, the originallity come from adding a global average pool and dropout layers

    • Conv2D with 16 filters
    • BatchNormalization
    • ELU as activation function
    • Maxpool
    • Conv2D with 32 filters
    • BatchNormalization
    • ELU as activation function
    • Maxpool
    • Conv2D with 64 filters
    • BatchNormalization
    • ELU as activation function
    • Global average pool
    • Dropout
    • Dense
    • Dropout
    • Dense with sigmoid as an activation function
    In [16]:
    from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
    from keras.layers import Dropout, Flatten, Dense
    from keras.models import Sequential
    from keras.layers.advanced_activations import ELU
    from keras.layers.normalization import BatchNormalization
    
    model = Sequential()
    
    model.add(Conv2D(filters=16,
                    kernel_size=2,
                    strides=1,
                    padding="same",
                    input_shape=(224, 224, 3)))
    
    model.add(BatchNormalization())
    
    model.add(ELU(alpha=1.0))
    
    model.add(MaxPooling2D(pool_size=2))
    
    model.add(Conv2D(filters=32,
                    kernel_size=2,
                    strides=1,
                    padding="same"))
    
    model.add(BatchNormalization())
    
    model.add(ELU(alpha=1.0))
    
    model.add(MaxPooling2D(pool_size=2))
    
    model.add(Conv2D(filters=64,
                    kernel_size=2,
                    strides=1,
                    padding="same"))
    
    model.add(BatchNormalization())
    
    model.add(ELU(alpha=1.0))
    
    model.add(GlobalAveragePooling2D())
    
    model.add(Dropout(0.4))
    
    model.add(Dense(64, activation="relu"))
    
    model.add(Dropout(0.3))
    
    model.add(Dense(133, activation="softmax"))
    
    model.summary()
    
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    conv2d_4 (Conv2D)            (None, 224, 224, 16)      208       
    _________________________________________________________________
    batch_normalization_4 (Batch (None, 224, 224, 16)      64        
    _________________________________________________________________
    elu_4 (ELU)                  (None, 224, 224, 16)      0         
    _________________________________________________________________
    max_pooling2d_4 (MaxPooling2 (None, 112, 112, 16)      0         
    _________________________________________________________________
    conv2d_5 (Conv2D)            (None, 112, 112, 32)      2080      
    _________________________________________________________________
    batch_normalization_5 (Batch (None, 112, 112, 32)      128       
    _________________________________________________________________
    elu_5 (ELU)                  (None, 112, 112, 32)      0         
    _________________________________________________________________
    max_pooling2d_5 (MaxPooling2 (None, 56, 56, 32)        0         
    _________________________________________________________________
    conv2d_6 (Conv2D)            (None, 56, 56, 64)        8256      
    _________________________________________________________________
    batch_normalization_6 (Batch (None, 56, 56, 64)        256       
    _________________________________________________________________
    elu_6 (ELU)                  (None, 56, 56, 64)        0         
    _________________________________________________________________
    global_average_pooling2d_1 ( (None, 64)                0         
    _________________________________________________________________
    dropout_3 (Dropout)          (None, 64)                0         
    _________________________________________________________________
    dense_4 (Dense)              (None, 64)                4160      
    _________________________________________________________________
    dropout_4 (Dropout)          (None, 64)                0         
    _________________________________________________________________
    dense_5 (Dense)              (None, 133)               8645      
    =================================================================
    Total params: 23,797
    Trainable params: 23,573
    Non-trainable params: 224
    _________________________________________________________________
    

    Compile the Model

    In [17]:
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
    

    (IMPLEMENTATION) Train the Model

    Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.

    You are welcome to augment the training data, but this is not a requirement.

    In [16]:
    from keras.callbacks import ModelCheckpoint  
    from keras.preprocessing.image import ImageDataGenerator
    ### TODO: specify the number of epochs that you would like to use to train the model.
    
    epochs = 25
    
    start = time()
    
    ## Here we'll do data augmentation
    # Data augmentation for training
    train_datagen_augmentation = ImageDataGenerator(
                    rotation_range=40,
                    width_shift_range=0.2,
                    height_shift_range=0.2,
                    shear_range=0.2,
                    zoom_range=0.2, 
                    horizontal_flip = True)
    
    train_datagen_augmentation.fit(train_tensors)
    
    ### Do NOT modify the code below this line.
    batch_size = 20
    
    checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5', 
                                   verbose=1, save_best_only=True)
    
    
    model.fit_generator(train_datagen_augmentation.flow(train_tensors, train_targets, batch_size=batch_size),
                # Thanks to Alexis Cook  
                steps_per_epoch=train_tensors.shape[0] // batch_size,
                epochs=epochs,
                verbose=1,
                callbacks=[checkpointer],
                validation_data=(valid_tensors, valid_targets)
                )
                  
    end = time()
    total_time = end - start
    print("The total computation time is {} ".format(total_time/60), " minutes")
    
    Epoch 1/25
    333/334 [============================>.] - ETA: 1s - loss: 4.8937 - acc: 0.0110Epoch 00000: val_loss improved from inf to 4.85731, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 386s - loss: 4.8936 - acc: 0.0109 - val_loss: 4.8573 - val_acc: 0.0156
    Epoch 2/25
    333/334 [============================>.] - ETA: 1s - loss: 4.8195 - acc: 0.0170Epoch 00001: val_loss improved from 4.85731 to 4.79323, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 385s - loss: 4.8193 - acc: 0.0171 - val_loss: 4.7932 - val_acc: 0.0168
    Epoch 3/25
    333/334 [============================>.] - ETA: 1s - loss: 4.7797 - acc: 0.0213Epoch 00002: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.7797 - acc: 0.0213 - val_loss: 4.8146 - val_acc: 0.0192
    Epoch 4/25
    333/334 [============================>.] - ETA: 1s - loss: 4.7388 - acc: 0.0281Epoch 00003: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.7385 - acc: 0.0280 - val_loss: 4.7991 - val_acc: 0.0228
    Epoch 5/25
    333/334 [============================>.] - ETA: 1s - loss: 4.7087 - acc: 0.0267Epoch 00004: val_loss improved from 4.79323 to 4.70867, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 385s - loss: 4.7089 - acc: 0.0266 - val_loss: 4.7087 - val_acc: 0.0240
    Epoch 6/25
    333/334 [============================>.] - ETA: 1s - loss: 4.6842 - acc: 0.0255Epoch 00005: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.6839 - acc: 0.0256 - val_loss: 5.0019 - val_acc: 0.0228
    Epoch 7/25
    333/334 [============================>.] - ETA: 1s - loss: 4.6557 - acc: 0.0296Epoch 00006: val_loss improved from 4.70867 to 4.67745, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 385s - loss: 4.6556 - acc: 0.0296 - val_loss: 4.6775 - val_acc: 0.0275
    Epoch 8/25
    333/334 [============================>.] - ETA: 1s - loss: 4.6382 - acc: 0.0317Epoch 00007: val_loss improved from 4.67745 to 4.63137, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 385s - loss: 4.6388 - acc: 0.0317 - val_loss: 4.6314 - val_acc: 0.0359
    Epoch 9/25
    333/334 [============================>.] - ETA: 1s - loss: 4.6125 - acc: 0.0359Epoch 00008: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.6124 - acc: 0.0361 - val_loss: 4.6469 - val_acc: 0.0299
    Epoch 10/25
    333/334 [============================>.] - ETA: 1s - loss: 4.6032 - acc: 0.0390Epoch 00009: val_loss improved from 4.63137 to 4.62699, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 385s - loss: 4.6019 - acc: 0.0394 - val_loss: 4.6270 - val_acc: 0.0359
    Epoch 11/25
    333/334 [============================>.] - ETA: 1s - loss: 4.5856 - acc: 0.0378Epoch 00010: val_loss improved from 4.62699 to 4.60934, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 385s - loss: 4.5853 - acc: 0.0380 - val_loss: 4.6093 - val_acc: 0.0323
    Epoch 12/25
    333/334 [============================>.] - ETA: 1s - loss: 4.5716 - acc: 0.0380Epoch 00011: val_loss improved from 4.60934 to 4.54866, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 385s - loss: 4.5716 - acc: 0.0382 - val_loss: 4.5487 - val_acc: 0.0431
    Epoch 13/25
    333/334 [============================>.] - ETA: 1s - loss: 4.5548 - acc: 0.0426Epoch 00012: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.5545 - acc: 0.0427 - val_loss: 4.5515 - val_acc: 0.0491
    Epoch 14/25
    333/334 [============================>.] - ETA: 1s - loss: 4.5389 - acc: 0.0411Epoch 00013: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.5383 - acc: 0.0410 - val_loss: 4.6289 - val_acc: 0.0275
    Epoch 15/25
    333/334 [============================>.] - ETA: 1s - loss: 4.5316 - acc: 0.0410Epoch 00014: val_loss improved from 4.54866 to 4.54833, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 384s - loss: 4.5318 - acc: 0.0409 - val_loss: 4.5483 - val_acc: 0.0419
    Epoch 16/25
    333/334 [============================>.] - ETA: 1s - loss: 4.5176 - acc: 0.0434Epoch 00015: val_loss did not improve
    334/334 [==============================] - 384s - loss: 4.5179 - acc: 0.0434 - val_loss: 4.5754 - val_acc: 0.0371
    Epoch 17/25
    333/334 [============================>.] - ETA: 1s - loss: 4.5163 - acc: 0.0449Epoch 00016: val_loss did not improve
    334/334 [==============================] - 384s - loss: 4.5163 - acc: 0.0448 - val_loss: 4.5957 - val_acc: 0.0419
    Epoch 18/25
    333/334 [============================>.] - ETA: 1s - loss: 4.4944 - acc: 0.0459Epoch 00017: val_loss did not improve
    334/334 [==============================] - 384s - loss: 4.4944 - acc: 0.0461 - val_loss: 4.5706 - val_acc: 0.0431
    Epoch 19/25
    333/334 [============================>.] - ETA: 1s - loss: 4.4991 - acc: 0.0474Epoch 00018: val_loss improved from 4.54833 to 4.50078, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 384s - loss: 4.4992 - acc: 0.0475 - val_loss: 4.5008 - val_acc: 0.0539
    Epoch 20/25
    333/334 [============================>.] - ETA: 1s - loss: 4.4845 - acc: 0.0533Epoch 00019: val_loss did not improve
    334/334 [==============================] - 384s - loss: 4.4845 - acc: 0.0534 - val_loss: 4.6190 - val_acc: 0.0539
    Epoch 21/25
    333/334 [============================>.] - ETA: 1s - loss: 4.4726 - acc: 0.0468Epoch 00020: val_loss did not improve
    334/334 [==============================] - 384s - loss: 4.4720 - acc: 0.0469 - val_loss: 4.6432 - val_acc: 0.0383
    Epoch 22/25
    333/334 [============================>.] - ETA: 1s - loss: 4.4757 - acc: 0.0508Epoch 00021: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.4752 - acc: 0.0506 - val_loss: 4.5608 - val_acc: 0.0431
    Epoch 23/25
    333/334 [============================>.] - ETA: 1s - loss: 4.4701 - acc: 0.0456Epoch 00022: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.4693 - acc: 0.0458 - val_loss: 4.5404 - val_acc: 0.0443
    Epoch 24/25
    333/334 [============================>.] - ETA: 1s - loss: 4.4573 - acc: 0.0494Epoch 00023: val_loss did not improve
    334/334 [==============================] - 385s - loss: 4.4568 - acc: 0.0493 - val_loss: 4.5084 - val_acc: 0.0419
    Epoch 25/25
    333/334 [============================>.] - ETA: 1s - loss: 4.4442 - acc: 0.0523Epoch 00024: val_loss improved from 4.50078 to 4.45919, saving model to saved_models/weights.best.from_scratch.hdf5
    334/334 [==============================] - 385s - loss: 4.4444 - acc: 0.0521 - val_loss: 4.4592 - val_acc: 0.0431
    The total computation time is 160.55984765291214   minutes
    

    Load the Model with the Best Validation Loss

    In [18]:
    model.load_weights('saved_models/weights.best.from_scratch.hdf5')
    

    Test the Model

    Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.

    In [19]:
    # get index of predicted dog breed for each image in test set
    dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]
    
    # report test accuracy
    test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
    print('Test accuracy: %.4f%%' % test_accuracy)
    
    Test accuracy: 5.3828%
    

    Step 4: Use a CNN to Classify Dog Breeds

    To reduce training time without sacrificing accuracy, we show you how to train a CNN using transfer learning. In the following step, you will get a chance to use transfer learning to train your own CNN.

    Obtain Bottleneck Features

    In [20]:
    bottleneck_features = np.load('bottleneck_features/DogVGG16Data.npz')
    train_VGG16 = bottleneck_features['train']
    valid_VGG16 = bottleneck_features['valid']
    test_VGG16 = bottleneck_features['test']
    

    Model Architecture

    The model uses the the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.

    In [21]:
    VGG16_model = Sequential()
    VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
    VGG16_model.add(Dense(133, activation='softmax'))
    
    VGG16_model.summary()
    
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    global_average_pooling2d_2 ( (None, 512)               0         
    _________________________________________________________________
    dense_6 (Dense)              (None, 133)               68229     
    =================================================================
    Total params: 68,229
    Trainable params: 68,229
    Non-trainable params: 0
    _________________________________________________________________
    

    Compile the Model

    In [22]:
    VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    

    Train the Model

    In [22]:
    checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5', 
                                   verbose=1, save_best_only=True)
    
    VGG16_model.fit(train_VGG16, train_targets, 
              validation_data=(valid_VGG16, valid_targets),
              epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
    
    Train on 6680 samples, validate on 835 samples
    Epoch 1/20
    6360/6680 [===========================>..] - ETA: 0s - loss: 12.0572 - acc: 0.1286Epoch 00000: val_loss improved from inf to 10.52639, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 11.9785 - acc: 0.1340 - val_loss: 10.5264 - val_acc: 0.2144
    Epoch 2/20
    6420/6680 [===========================>..] - ETA: 0s - loss: 9.8541 - acc: 0.2896Epoch 00001: val_loss improved from 10.52639 to 9.83315, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 9.8515 - acc: 0.2901 - val_loss: 9.8331 - val_acc: 0.2790
    Epoch 3/20
    6420/6680 [===========================>..] - ETA: 0s - loss: 9.1116 - acc: 0.3603Epoch 00002: val_loss improved from 9.83315 to 9.41649, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 9.1017 - acc: 0.3608 - val_loss: 9.4165 - val_acc: 0.3281
    Epoch 4/20
    6420/6680 [===========================>..] - ETA: 0s - loss: 8.7140 - acc: 0.3995Epoch 00003: val_loss improved from 9.41649 to 9.20268, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 8.7055 - acc: 0.3996 - val_loss: 9.2027 - val_acc: 0.3449
    Epoch 5/20
    6420/6680 [===========================>..] - ETA: 0s - loss: 8.5592 - acc: 0.4277Epoch 00004: val_loss improved from 9.20268 to 9.11527, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 8.5257 - acc: 0.4298 - val_loss: 9.1153 - val_acc: 0.3497
    Epoch 6/20
    6640/6680 [============================>.] - ETA: 0s - loss: 8.4032 - acc: 0.4461Epoch 00005: val_loss did not improve
    6680/6680 [==============================] - 1s - loss: 8.4183 - acc: 0.4452 - val_loss: 9.1191 - val_acc: 0.3509
    Epoch 7/20
    6640/6680 [============================>.] - ETA: 0s - loss: 8.2926 - acc: 0.4523Epoch 00006: val_loss improved from 9.11527 to 8.93953, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 8.2820 - acc: 0.4527 - val_loss: 8.9395 - val_acc: 0.3581
    Epoch 8/20
    6640/6680 [============================>.] - ETA: 0s - loss: 8.0443 - acc: 0.4708- ETA: 1s - loss:Epoch 00007: val_loss improved from 8.93953 to 8.82504, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 8.0507 - acc: 0.4701 - val_loss: 8.8250 - val_acc: 0.3653
    Epoch 9/20
    6660/6680 [============================>.] - ETA: 0s - loss: 7.9664 - acc: 0.4853Epoch 00008: val_loss improved from 8.82504 to 8.71888, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 7.9679 - acc: 0.4850 - val_loss: 8.7189 - val_acc: 0.3641
    Epoch 10/20
    6560/6680 [============================>.] - ETA: 0s - loss: 7.8945 - acc: 0.4922Epoch 00009: val_loss did not improve
    6680/6680 [==============================] - 1s - loss: 7.9054 - acc: 0.4915 - val_loss: 8.7551 - val_acc: 0.3737
    Epoch 11/20
    6540/6680 [============================>.] - ETA: 0s - loss: 7.8031 - acc: 0.5012Epoch 00010: val_loss improved from 8.71888 to 8.65268, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 7.8141 - acc: 0.5006 - val_loss: 8.6527 - val_acc: 0.3844
    Epoch 12/20
    6460/6680 [============================>.] - ETA: 0s - loss: 7.6895 - acc: 0.5090Epoch 00011: val_loss improved from 8.65268 to 8.47085, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 7.6875 - acc: 0.5091 - val_loss: 8.4708 - val_acc: 0.3976
    Epoch 13/20
    6400/6680 [===========================>..] - ETA: 0s - loss: 7.5964 - acc: 0.5164Epoch 00012: val_loss did not improve
    6680/6680 [==============================] - 1s - loss: 7.5915 - acc: 0.5169 - val_loss: 8.5099 - val_acc: 0.3904
    Epoch 14/20
    6420/6680 [===========================>..] - ETA: 0s - loss: 7.4700 - acc: 0.5218Epoch 00013: val_loss improved from 8.47085 to 8.29532, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 7.4753 - acc: 0.5216 - val_loss: 8.2953 - val_acc: 0.3904
    Epoch 15/20
    6380/6680 [===========================>..] - ETA: 0s - loss: 7.2764 - acc: 0.5332Epoch 00014: val_loss improved from 8.29532 to 8.17412, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 7.2993 - acc: 0.5317 - val_loss: 8.1741 - val_acc: 0.4168
    Epoch 16/20
    6380/6680 [===========================>..] - ETA: 0s - loss: 7.1735 - acc: 0.5400Epoch 00015: val_loss improved from 8.17412 to 8.10953, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 7.1728 - acc: 0.5401 - val_loss: 8.1095 - val_acc: 0.4156
    Epoch 17/20
    6360/6680 [===========================>..] - ETA: 0s - loss: 7.0741 - acc: 0.5481Epoch 00016: val_loss improved from 8.10953 to 8.06055, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 7.0708 - acc: 0.5476 - val_loss: 8.0606 - val_acc: 0.4144
    Epoch 18/20
    6420/6680 [===========================>..] - ETA: 0s - loss: 6.9678 - acc: 0.5558Epoch 00017: val_loss improved from 8.06055 to 7.97838, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 6.9666 - acc: 0.5555 - val_loss: 7.9784 - val_acc: 0.4228
    Epoch 19/20
    6380/6680 [===========================>..] - ETA: 0s - loss: 6.8918 - acc: 0.5669Epoch 00018: val_loss improved from 7.97838 to 7.84792, saving model to saved_models/weights.best.VGG16.hdf5
    6680/6680 [==============================] - 1s - loss: 6.9269 - acc: 0.5647 - val_loss: 7.8479 - val_acc: 0.4287
    Epoch 20/20
    6400/6680 [===========================>..] - ETA: 0s - loss: 6.9364 - acc: 0.5650Epoch 00019: val_loss did not improve
    6680/6680 [==============================] - 1s - loss: 6.9130 - acc: 0.5656 - val_loss: 7.9821 - val_acc: 0.4299
    
    Out[22]:
    <keras.callbacks.History at 0x7ff0dcb01f60>

    Load the Model with the Best Validation Loss

    In [23]:
    VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')
    

    Test the Model

    Now, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.

    In [24]:
    # get index of predicted dog breed for each image in test set
    VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]
    
    # report test accuracy
    test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
    print('Test accuracy: %.4f%%' % test_accuracy)
    
    Test accuracy: 46.7703%
    

    Predict Dog Breed with the Model

    In [25]:
    from extract_bottleneck_features import *
    
    def VGG16_predict_breed(img_path):
        # extract bottleneck features
        bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
        # obtain predicted vector
        predicted_vector = VGG16_model.predict(bottleneck_feature)
        # return dog breed that is predicted by the model
        return dog_names[np.argmax(predicted_vector)]
    

    Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)

    You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.

    In Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras:

    The files are encoded as such:

    Dog{network}Data.npz
    
    

    where {network}, in the above filename, can be one of VGG19, Resnet50, InceptionV3, or Xception. Pick one of the above architectures, download the corresponding bottleneck features, and store the downloaded file in the bottleneck_features/ folder in the repository.

    (IMPLEMENTATION) Obtain Bottleneck Features

    In the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following:

    bottleneck_features = np.load('bottleneck_features/Dog{network}Data.npz')
    train_{network} = bottleneck_features['train']
    valid_{network} = bottleneck_features['valid']
    test_{network} = bottleneck_features['test']
    In [42]:
    import os
    import zipfile
    import tarfile
    import requests
    # A big BIG thanks to Madhava Jay
    def download_file(url, path='./'):
        filename = url.split('/')[-1]
        print('Downloading {}'.format(filename))
        path = os.path.join(path, filename)
        r = requests.get(url, stream=True)
        with open(path, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024): 
                if chunk: # filter out keep-alive new chunks
                    f.write(chunk)
        print('Download complete')
        return filename
    
    def extract(archive, folder):
        print('Extracting {}'.format(archive))
        
        if (archive.endswith('tgz')):
            tar = tarfile.open(archive, 'r:gz')
            tar.extractall()
            tar.close()
        elif (archive.endswith('zip')):    
            with zipfile.ZipFile(archive, 'r') as zip_ref:
                zip_ref.extractall()
        else:
            print('Archive type {} not recognized'.format(archive))
    
        if os.path.isdir(folder):
            print('Extracting complete'.format(archive))
        else:
            print('Extracting failed'.format(archive))
            
    def download_extract(url, folder, force_download=False):
        filename = url.split('/')[-1]
        downloadPath = os.path.join(os.getcwd(), folder)
        if os.path.isdir(downloadPath) is False:
            if os.path.exists(filename):
                if force_download is False:
                    print('File {} found skipping download'.format(filename))
                else:
                    print('Forcing download of {}'.format(filename))
                    download_file(url)
                extract(filename, downloadPath)
            else:
                download_file(url)
                extract(filename, downloadPath)
    
    In [ ]:
    """
    bottleneckFeaturesXceptionUrl = "https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogXceptionData.npz"
    bottleneckFeaturesFolder = "bottleneck_features"
    download_file(bottleneckFeaturesXceptionUrl, bottleneckFeaturesFolder)
    """
    
    In [26]:
    ### TODO: Obtain bottleneck features from another pre-trained CNN.
    bottleneck_features = np.load('bottleneck_features/DogXceptionData.npz')
    train_Xception = bottleneck_features['train']
    valid_Xception = bottleneck_features['valid']
    test_Xception = bottleneck_features['test']
    

    (IMPLEMENTATION) Model Architecture

    Model 1: Xception_model with batch norm, global avg pooling, dense and dropout

    Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:

        <your model's name>.summary()
    
    

    Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.

    Answer: I tried 3 differents architecture : 2 with the Xception bottleneck and one with the Inception bottleneck

    The best is the first model, Xception_model with an accuracy of 83%.

    The second model Xception_model2 is really really bad. No dropouts and no dense hidden layers may be the cause.

    The difference between the first and the third model is the bottleneck. The third model is less accurate with the same architecture than the first model: 81%.

    I think the first model is the best for different reasons:

    • First of using bottleneck_features help us to increase massively our accuracy and saving computation time.
    • Xception use Inception strategy seen in our course which leads to very good accurracy.
    • I implemented dropout which prevent overfitting.
    In [27]:
    ### TODO: Define your architecture.
    from keras.layers import Dense, Flatten, GlobalAveragePooling2D, Dropout
    from keras.layers.advanced_activations import ELU
    from keras.layers.normalization import BatchNormalization
    
    Xception_model = Sequential()
    BatchNormalization(axis=-1)
    Xception_model.add(GlobalAveragePooling2D(input_shape=train_Xception.shape[1:] ))
    
    # Xception_model.add(Flatten())
    
    Xception_model.add(Dropout(0.4))
    
    Xception_model.add(Dense(64, activation="relu"))
    
    Xception_model.add(Dropout(0.3))
    # 133 because
    
    Xception_model.add(Dense(133, activation="softmax"))
    
    Xception_model.summary()
    
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    global_average_pooling2d_3 ( (None, 2048)              0         
    _________________________________________________________________
    dropout_5 (Dropout)          (None, 2048)              0         
    _________________________________________________________________
    dense_7 (Dense)              (None, 64)                131136    
    _________________________________________________________________
    dropout_6 (Dropout)          (None, 64)                0         
    _________________________________________________________________
    dense_8 (Dense)              (None, 133)               8645      
    =================================================================
    Total params: 139,781
    Trainable params: 139,781
    Non-trainable params: 0
    _________________________________________________________________
    
    In [28]:
    Xception_model2 = Sequential()
    
    Xception_model2.add(Flatten(input_shape=train_Xception.shape[1:]))
    Xception_model2.add(Dense(133, activation="softmax"))
    
    Xception_model2.summary()
    
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    flatten_3 (Flatten)          (None, 100352)            0         
    _________________________________________________________________
    dense_9 (Dense)              (None, 133)               13346949  
    =================================================================
    Total params: 13,346,949
    Trainable params: 13,346,949
    Non-trainable params: 0
    _________________________________________________________________
    

    Model 3: Inception Model

    In [29]:
    # A big BIG thanks to Madhava Jay
    """
    bottleneckFeaturesInceptionUrl = "https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogInceptionV3Data.npz"
    bottleneckFeaturesFolder = "bottleneck_features"
    download_file(bottleneckFeaturesInceptionUrl, bottleneckFeaturesFolder)
    """
    
    Out[29]:
    '\nbottleneckFeaturesInceptionUrl = "https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogInceptionV3Data.npz"\nbottleneckFeaturesFolder = "bottleneck_features"\ndownload_file(bottleneckFeaturesInceptionUrl, bottleneckFeaturesFolder)\n'
    In [30]:
    ### TODO: Obtain bottleneck features from another pre-trained CNN.
    bottleneck_features = np.load('bottleneck_features/DogInceptionV3Data.npz')
    train_Inception = bottleneck_features['train']
    valid_Inception = bottleneck_features['valid']
    test_Inception = bottleneck_features['test']
    
    In [31]:
    ### TODO: Define your architecture.
    from keras.layers import Dense, Flatten, GlobalAveragePooling2D, Dropout
    from keras.layers.advanced_activations import ELU
    from keras.layers.normalization import BatchNormalization
    
    Inception_model = Sequential()
    BatchNormalization(axis=-1)
    Inception_model.add(GlobalAveragePooling2D(input_shape=train_Inception.shape[1:]))
    
    Inception_model.add(Dropout(0.4))
    
    Inception_model.add(Dense(64, activation="relu"))
    
    Inception_model.add(Dropout(0.3))
    # 133 because of breeds
    
    Inception_model.add(Dense(133, activation="softmax"))
    
    Inception_model.summary()
    
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    global_average_pooling2d_4 ( (None, 2048)              0         
    _________________________________________________________________
    dropout_7 (Dropout)          (None, 2048)              0         
    _________________________________________________________________
    dense_10 (Dense)             (None, 64)                131136    
    _________________________________________________________________
    dropout_8 (Dropout)          (None, 64)                0         
    _________________________________________________________________
    dense_11 (Dense)             (None, 133)               8645      
    =================================================================
    Total params: 139,781
    Trainable params: 139,781
    Non-trainable params: 0
    _________________________________________________________________
    

    (IMPLEMENTATION) Compile the Model

    In [32]:
    Xception_model.compile(loss="categorical_crossentropy",
                optimizer="rmsprop",
                 metrics=["accuracy"])
    
    Xception_model2.compile(loss="categorical_crossentropy",
                optimizer="rmsprop",
                 metrics=["accuracy"])
    
    Inception_model.compile(loss="categorical_crossentropy",
                optimizer="rmsprop",
                 metrics=["accuracy"])
    

    (IMPLEMENTATION) Train the Model

    Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.

    You are welcome to augment the training data, but this is not a requirement.

    In [49]:
    from keras.callbacks import ModelCheckpoint  
    from keras.preprocessing.image import ImageDataGenerator
    # Augment the training data
    
    ## Here we'll do data augmentation
    # Data augmentation for training
    ## Because MemoryError I can't do again data augmentation
    
    """train_datagen_augmentation_2 = ImageDataGenerator(
                    rotation_range=10,
                    width_shift_range=0.2,
                    height_shift_range=0.2,
                    shear_range=0.2,
                    zoom_range=0.1, 
                    horizontal_flip = True)
    
    train_datagen_augmentation_2.fit(train_tensors)"""
    
    epochs=25
    
    batch_size=65
    
    
    # Train the model
    # Checkpoint
    checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.Xception.hdf5', 
                                   verbose=1, save_best_only=True)
    
    """Xception_model.fit_generator(train_datagen_augmentation.flow(train_Xception, train_targets, batch_size=batch_size),
                # Thanks to Alexis Cook  
                steps_per_epoch=train_tensors.shape[0] // batch_size,
                epochs=epochs,
                verbose=1,
                callbacks=[checkpointer],
                validation_data=(valid_Xception, valid_targets),
                shuffle=True
                )
    """
    
    Xception_model.fit(train_Xception, train_targets,
                    validation_data=(valid_Xception, valid_targets),
                    epochs=epochs,
                       callbacks=[checkpointer],
                       verbose=1
                      )
    
    Train on 6680 samples, validate on 835 samples
    Epoch 1/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.4492 - acc: 0.8578Epoch 00000: val_loss improved from inf to 0.51495, saving model to saved_models/weights.best.Xception.hdf5
    6680/6680 [==============================] - 3s - loss: 0.4491 - acc: 0.8584 - val_loss: 0.5149 - val_acc: 0.8467
    Epoch 2/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.4099 - acc: 0.8674Epoch 00001: val_loss improved from 0.51495 to 0.48867, saving model to saved_models/weights.best.Xception.hdf5
    6680/6680 [==============================] - 3s - loss: 0.4126 - acc: 0.8672 - val_loss: 0.4887 - val_acc: 0.8503
    Epoch 3/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3835 - acc: 0.8758Epoch 00002: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3819 - acc: 0.8757 - val_loss: 0.5019 - val_acc: 0.8575
    Epoch 4/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.4157 - acc: 0.8646Epoch 00003: val_loss improved from 0.48867 to 0.48624, saving model to saved_models/weights.best.Xception.hdf5
    6680/6680 [==============================] - 3s - loss: 0.4164 - acc: 0.8638 - val_loss: 0.4862 - val_acc: 0.8587
    Epoch 5/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3842 - acc: 0.8747Epoch 00004: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3837 - acc: 0.8741 - val_loss: 0.5133 - val_acc: 0.8623
    Epoch 6/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3709 - acc: 0.8816Epoch 00005: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3704 - acc: 0.8820 - val_loss: 0.4992 - val_acc: 0.8551
    Epoch 7/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3665 - acc: 0.8761Epoch 00006: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3683 - acc: 0.8756 - val_loss: 0.4997 - val_acc: 0.8515
    Epoch 8/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3492 - acc: 0.8782Epoch 00007: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3499 - acc: 0.8783 - val_loss: 0.5219 - val_acc: 0.8587
    Epoch 9/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3547 - acc: 0.8848Epoch 00008: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3559 - acc: 0.8849 - val_loss: 0.5207 - val_acc: 0.8575
    Epoch 10/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3522 - acc: 0.8849Epoch 00009: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3542 - acc: 0.8843 - val_loss: 0.5167 - val_acc: 0.8551
    Epoch 11/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3330 - acc: 0.8924Epoch 00010: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3344 - acc: 0.8921 - val_loss: 0.5202 - val_acc: 0.8479
    Epoch 12/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3264 - acc: 0.8934Epoch 00011: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3254 - acc: 0.8930 - val_loss: 0.5409 - val_acc: 0.8539
    Epoch 13/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3318 - acc: 0.8889Epoch 00012: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3323 - acc: 0.8892 - val_loss: 0.5326 - val_acc: 0.8575
    Epoch 14/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3364 - acc: 0.8907Epoch 00013: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3348 - acc: 0.8910 - val_loss: 0.5667 - val_acc: 0.8539
    Epoch 15/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3291 - acc: 0.8928Epoch 00014: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3266 - acc: 0.8937 - val_loss: 0.5503 - val_acc: 0.8659
    Epoch 16/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3321 - acc: 0.8930Epoch 00015: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3320 - acc: 0.8933 - val_loss: 0.5617 - val_acc: 0.8503
    Epoch 17/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3140 - acc: 0.8945Epoch 00016: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3129 - acc: 0.8946 - val_loss: 0.5542 - val_acc: 0.8443
    Epoch 18/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3147 - acc: 0.8966Epoch 00017: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3132 - acc: 0.8966 - val_loss: 0.6111 - val_acc: 0.8515
    Epoch 19/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3210 - acc: 0.8954Epoch 00018: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3223 - acc: 0.8949 - val_loss: 0.5679 - val_acc: 0.8479
    Epoch 20/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.2943 - acc: 0.9003Epoch 00019: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.2946 - acc: 0.8997 - val_loss: 0.5988 - val_acc: 0.8503
    Epoch 21/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3114 - acc: 0.8968Epoch 00020: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3117 - acc: 0.8967 - val_loss: 0.5716 - val_acc: 0.8455
    Epoch 22/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.2901 - acc: 0.9041Epoch 00021: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.2921 - acc: 0.9036 - val_loss: 0.5647 - val_acc: 0.8503
    Epoch 23/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3035 - acc: 0.8998Epoch 00022: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3022 - acc: 0.9000 - val_loss: 0.5827 - val_acc: 0.8551
    Epoch 24/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.3031 - acc: 0.8991Epoch 00023: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.3047 - acc: 0.8988 - val_loss: 0.5817 - val_acc: 0.8443
    Epoch 25/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.2930 - acc: 0.9009Epoch 00024: val_loss did not improve
    6680/6680 [==============================] - 3s - loss: 0.2947 - acc: 0.8999 - val_loss: 0.6034 - val_acc: 0.8443
    
    Out[49]:
    <keras.callbacks.History at 0x7f9722ef7eb8>
    In [42]:
    """Xception_model2.fit_generator(train_datagen_augmentation_2.flow(train_Xception, train_targets, batch_size=batch_size),
                # Thanks to Alexis Cook  
                steps_per_epoch=train_tensors.shape[0] // batch_size,
                epochs=epochs,
                verbose=1,
                callbacks=[checkpointer],
                validation_data=(valid_Xception, valid_targets),
                shuffle=True
                )
    """
    Xception_model2.fit(train_Xception, train_targets,
                    validation_data=(valid_Xception, valid_targets),
                    epochs=epochs,
                       callbacks=[checkpointer],
                       verbose=1
                      )
    
    Train on 6680 samples, validate on 835 samples
    Epoch 1/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.8963 - acc: 0.1840Epoch 00000: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.9030 - acc: 0.1837 - val_loss: 12.7169 - val_acc: 0.2036
    Epoch 2/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.6246 - acc: 0.2114Epoch 00001: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.6203 - acc: 0.2117 - val_loss: 12.4504 - val_acc: 0.2228
    Epoch 3/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.5203 - acc: 0.2197Epoch 00002: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.5235 - acc: 0.2195 - val_loss: 12.4396 - val_acc: 0.2216
    Epoch 4/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.3981 - acc: 0.2279Epoch 00003: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.3903 - acc: 0.2283 - val_loss: 12.4940 - val_acc: 0.2216
    Epoch 5/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.3691 - acc: 0.2303Epoch 00004: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.3657 - acc: 0.2305 - val_loss: 12.4685 - val_acc: 0.2216
    Epoch 6/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.3493 - acc: 0.2320Epoch 00005: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.3531 - acc: 0.2317 - val_loss: 12.4200 - val_acc: 0.2275
    Epoch 7/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.3198 - acc: 0.2339Epoch 00006: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.3262 - acc: 0.2335 - val_loss: 12.4746 - val_acc: 0.2204
    Epoch 8/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.3038 - acc: 0.2353Epoch 00007: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.3127 - acc: 0.2347 - val_loss: 12.4022 - val_acc: 0.2287
    Epoch 9/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.3035 - acc: 0.2360Epoch 00008: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.3124 - acc: 0.2355 - val_loss: 12.4616 - val_acc: 0.2240
    Epoch 10/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2864 - acc: 0.2374Epoch 00009: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2809 - acc: 0.2377 - val_loss: 12.4289 - val_acc: 0.2263
    Epoch 11/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2889 - acc: 0.2368Epoch 00010: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2834 - acc: 0.2371 - val_loss: 12.3545 - val_acc: 0.2335
    Epoch 12/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2766 - acc: 0.2380Epoch 00011: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2735 - acc: 0.2382 - val_loss: 12.3877 - val_acc: 0.2287
    Epoch 13/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2658 - acc: 0.2389Epoch 00012: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2724 - acc: 0.2385 - val_loss: 12.3985 - val_acc: 0.2299
    Epoch 14/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2817 - acc: 0.2378Epoch 00013: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2738 - acc: 0.2383 - val_loss: 12.4559 - val_acc: 0.2251
    Epoch 15/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2676 - acc: 0.2386Epoch 00014: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2693 - acc: 0.2385 - val_loss: 12.3805 - val_acc: 0.2299
    Epoch 16/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2751 - acc: 0.2384Epoch 00015: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2648 - acc: 0.2391 - val_loss: 12.3407 - val_acc: 0.2335
    Epoch 17/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2597 - acc: 0.2393Epoch 00016: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2664 - acc: 0.2389 - val_loss: 12.4067 - val_acc: 0.2287
    Epoch 18/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2620 - acc: 0.2389Epoch 00017: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2662 - acc: 0.2386 - val_loss: 12.3549 - val_acc: 0.2335
    Epoch 19/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2532 - acc: 0.2398Epoch 00018: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2647 - acc: 0.2391 - val_loss: 12.3549 - val_acc: 0.2335
    Epoch 20/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2702 - acc: 0.2387Epoch 00019: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2647 - acc: 0.2391 - val_loss: 12.3549 - val_acc: 0.2335
    Epoch 21/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2581 - acc: 0.2395Epoch 00020: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2647 - acc: 0.2391 - val_loss: 12.3549 - val_acc: 0.2335
    Epoch 22/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2581 - acc: 0.2395Epoch 00021: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2647 - acc: 0.2391 - val_loss: 12.3549 - val_acc: 0.2335
    Epoch 23/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2653 - acc: 0.2390Epoch 00022: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2647 - acc: 0.2391 - val_loss: 12.3549 - val_acc: 0.2335
    Epoch 24/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2581 - acc: 0.2395Epoch 00023: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2647 - acc: 0.2391 - val_loss: 12.3549 - val_acc: 0.2335
    Epoch 25/25
    6656/6680 [============================>.] - ETA: 0s - loss: 12.2702 - acc: 0.2387Epoch 00024: val_loss did not improve
    6680/6680 [==============================] - 23s - loss: 12.2647 - acc: 0.2391 - val_loss: 12.3549 - val_acc: 0.2335
    
    Out[42]:
    <keras.callbacks.History at 0x7f9754707b00>
    In [50]:
    """
    Inception_model.fit_generator(train_datagen_augmentation_2.flow(train_Inception, train_targets, batch_size=batch_size),
                # Thanks to Alexis Cook  
                steps_per_epoch=train_tensors.shape[0] // batch_size,
                epochs=epochs,
                verbose=1,
                callbacks=[checkpointer],
                validation_data=(valid_Inception, valid_targets),
                shuffle=True
                )
    """
    checkpointer2 = ModelCheckpoint(filepath='saved_models/weights.best.Inception.hdf5', 
                                   verbose=1, save_best_only=True)
    
    Inception_model.fit(train_Inception, train_targets,
                    validation_data=(valid_Inception, valid_targets),
                    epochs=epochs,
                       callbacks=[checkpointer2],
                       verbose=1
                      )
    
    Train on 6680 samples, validate on 835 samples
    Epoch 1/25
    6656/6680 [============================>.] - ETA: 0s - loss: 0.4755 - acc: 0.8540Epoch 00000: val_loss improved from inf to 0.63410, saving model to saved_models/weights.best.Inception.hdf5
    6680/6680 [==============================] - 2s - loss: 0.4756 - acc: 0.8539 - val_loss: 0.6341 - val_acc: 0.8443
    Epoch 2/25
    6528/6680 [============================>.] - ETA: 0s - loss: 0.4713 - acc: 0.8551Epoch 00001: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4709 - acc: 0.8558 - val_loss: 0.7147 - val_acc: 0.8311
    Epoch 3/25
    6496/6680 [============================>.] - ETA: 0s - loss: 0.4653 - acc: 0.8598Epoch 00002: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4653 - acc: 0.8594 - val_loss: 0.6363 - val_acc: 0.8323
    Epoch 4/25
    6656/6680 [============================>.] - ETA: 0s - loss: 0.4729 - acc: 0.8573Epoch 00003: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4717 - acc: 0.8576 - val_loss: 0.6777 - val_acc: 0.8263
    Epoch 5/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.4629 - acc: 0.8620Epoch 00004: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4635 - acc: 0.8617 - val_loss: 0.7161 - val_acc: 0.8287
    Epoch 6/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.4644 - acc: 0.8602Epoch 00005: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4648 - acc: 0.8596 - val_loss: 0.6648 - val_acc: 0.8383
    Epoch 7/25
    6656/6680 [============================>.] - ETA: 0s - loss: 0.4793 - acc: 0.8471Epoch 00006: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4791 - acc: 0.8472 - val_loss: 0.6678 - val_acc: 0.8251
    Epoch 8/25
    6496/6680 [============================>.] - ETA: 0s - loss: 0.4694 - acc: 0.8564Epoch 00007: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4696 - acc: 0.8567 - val_loss: 0.6784 - val_acc: 0.8311
    Epoch 9/25
    6592/6680 [============================>.] - ETA: 0s - loss: 0.4564 - acc: 0.8594Epoch 00008: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4559 - acc: 0.8597 - val_loss: 0.7217 - val_acc: 0.8407
    Epoch 10/25
    6624/6680 [============================>.] - ETA: 0s - loss: 0.4510 - acc: 0.8640Epoch 00009: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4510 - acc: 0.8641 - val_loss: 0.7041 - val_acc: 0.8299
    Epoch 11/25
    6624/6680 [============================>.] - ETA: 0s - loss: 0.4603 - acc: 0.8623Epoch 00010: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4595 - acc: 0.8623 - val_loss: 0.7285 - val_acc: 0.8323
    Epoch 12/25
    6624/6680 [============================>.] - ETA: 0s - loss: 0.4641 - acc: 0.8647Epoch 00011: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4618 - acc: 0.8653 - val_loss: 0.7700 - val_acc: 0.8359
    Epoch 13/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.4575 - acc: 0.8633Epoch 00012: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4537 - acc: 0.8638 - val_loss: 0.7289 - val_acc: 0.8383
    Epoch 14/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.4529 - acc: 0.8646Epoch 00013: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4534 - acc: 0.8650 - val_loss: 0.6667 - val_acc: 0.8503
    Epoch 15/25
    6528/6680 [============================>.] - ETA: 0s - loss: 0.4667 - acc: 0.8624Epoch 00014: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4674 - acc: 0.8620 - val_loss: 0.7017 - val_acc: 0.8347
    Epoch 16/25
    6656/6680 [============================>.] - ETA: 0s - loss: 0.4469 - acc: 0.8670Epoch 00015: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4457 - acc: 0.8675 - val_loss: 0.7292 - val_acc: 0.8431
    Epoch 17/25
    6592/6680 [============================>.] - ETA: 0s - loss: 0.4626 - acc: 0.8673Epoch 00016: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4640 - acc: 0.8665 - val_loss: 0.7923 - val_acc: 0.8323
    Epoch 18/25
    6592/6680 [============================>.] - ETA: 0s - loss: 0.4364 - acc: 0.8688Epoch 00017: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4341 - acc: 0.8693 - val_loss: 0.8065 - val_acc: 0.8299
    Epoch 19/25
    6528/6680 [============================>.] - ETA: 0s - loss: 0.4374 - acc: 0.8678Epoch 00018: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4392 - acc: 0.8675 - val_loss: 0.7716 - val_acc: 0.8395
    Epoch 20/25
    6528/6680 [============================>.] - ETA: 0s - loss: 0.4555 - acc: 0.8667Epoch 00019: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4629 - acc: 0.8657 - val_loss: 0.6705 - val_acc: 0.8359
    Epoch 21/25
    6624/6680 [============================>.] - ETA: 0s - loss: 0.4564 - acc: 0.8678Epoch 00020: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4560 - acc: 0.8680 - val_loss: 0.7569 - val_acc: 0.8527
    Epoch 22/25
    6592/6680 [============================>.] - ETA: 0s - loss: 0.4442 - acc: 0.8718Epoch 00021: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4452 - acc: 0.8711 - val_loss: 0.7578 - val_acc: 0.8383
    Epoch 23/25
    6624/6680 [============================>.] - ETA: 0s - loss: 0.4337 - acc: 0.8727Epoch 00022: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4349 - acc: 0.8722 - val_loss: 0.7744 - val_acc: 0.8359
    Epoch 24/25
    6528/6680 [============================>.] - ETA: 0s - loss: 0.4582 - acc: 0.8663Epoch 00023: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4591 - acc: 0.8663 - val_loss: 0.7873 - val_acc: 0.8311
    Epoch 25/25
    6560/6680 [============================>.] - ETA: 0s - loss: 0.4510 - acc: 0.8742Epoch 00024: val_loss did not improve
    6680/6680 [==============================] - 2s - loss: 0.4531 - acc: 0.8740 - val_loss: 0.7558 - val_acc: 0.8383
    
    Out[50]:
    <keras.callbacks.History at 0x7f9721e89ba8>

    (IMPLEMENTATION) Load the Model with the Best Validation Loss

    In [33]:
    ### TODO: Load the model weights with the best validation loss.
    Xception_model.load_weights("saved_models/weights.best.Xception.hdf5")
    Inception_model.load_weights("saved_models/weights.best.Inception.hdf5")
    

    (IMPLEMENTATION) Test the Model

    Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.

    In [29]:
    ### TODO: Calculate classification accuracy on the test dataset.
    Xception_predictions = [np.argmax(Xception_model.predict(np.expand_dims(feature, axis=0))) for feature in test_Xception]
    
    test_accuracy = 100 * np.sum(np.array(Xception_predictions) == np.argmax(test_targets, axis=1)) / len(Xception_predictions)
    
    print("Xception 1 Test accuracy: %.4f%%" % test_accuracy)
    
    Xception 1 Test accuracy: 83.2536%
    
    In [30]:
    ### TODO: Calculate classification accuracy on the test dataset.
    Xception_predictions2 = [np.argmax(Xception_model2.predict(np.expand_dims(feature, axis=0))) for feature in test_Xception]
    
    test_accuracy2 = 100 * np.sum(np.array(Xception_predictions2) == np.argmax(test_targets, axis=1)) / len(Xception_predictions2)
    
    print("Xception 2 Test accuracy: %.4f%%" % test_accuracy2)
    
    Xception 2 Test accuracy: 0.3589%
    
    In [31]:
    ### TODO: Calculate classification accuracy on the test dataset.
    Inception_predictions = [np.argmax(Inception_model.predict(np.expand_dims(feature, axis=0))) for feature in test_Inception]
    
    test_accuracy3 = 100 * np.sum(np.array(Inception_predictions) == np.argmax(test_targets, axis=1)) / len(Inception_predictions)
    
    print("Inception Test accuracy: %.4f%%" % test_accuracy3)
    
    Inception Test accuracy: 81.3397%
    

    (IMPLEMENTATION) Predict Dog Breed with the Model

    Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan_hound, etc) that is predicted by your model.

    Similar to the analogous function in Step 5, your function should have three steps:

    1. Extract the bottleneck features corresponding to the chosen CNN model.
    2. Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.
    3. Use the dog_names array defined in Step 0 of this notebook to return the corresponding breed.

    The functions to extract the bottleneck features can be found in extract_bottleneck_features.py, and they have been imported in an earlier code cell. To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function

    extract_{network}
    
    

    where {network}, in the above filename, should be one of VGG19, Resnet50, InceptionV3, or Xception.

    In [34]:
    ### TODO: Write a function that takes a path to an image as input
    ### and returns the dog breed that is predicted by the model.
    from extract_bottleneck_features import extract_Xception
    def Xception_predict_dog_breed(img_path):
        # Extract the bottleneck features corresponding to the chosen CNN model.
        bottleneck_feature = extract_Xception(path_to_tensor(img_path))
        
        # Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.
        predicted_vector = Xception_model.predict(bottleneck_feature)
        
        # Use the dog_names array defined in Step 0 of this notebook to return the corresponding breed.
        return dog_names[np.argmax(predicted_vector)]
    
    In [35]:
    ### TODO: Write a function that takes a path to an image as input
    ### and returns the dog breed that is predicted by the model.
    from extract_bottleneck_features import extract_InceptionV3
    def Inception_predict_dog_breed(img_path):
        # Extract the bottleneck features corresponding to the chosen CNN model.
        bottleneck_feature = extract_InceptionV3(path_to_tensor(img_path))
        
        # Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.
        predicted_vector = Inception_model.predict(bottleneck_feature)
        
        # Use the dog_names array defined in Step 0 of this notebook to return the corresponding breed.
        return dog_names[np.argmax(predicted_vector)]
    

    Step 6: Write your Algorithm

    Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,

    • if a dog is detected in the image, return the predicted breed.
    • if a human is detected in the image, return the resembling dog breed.
    • if neither is detected in the image, provide output that indicates an error.

    You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 5 to predict dog breed.

    Some sample output for our algorithm is provided below, but feel free to design your own user experience!

    Sample Human Output

    (IMPLEMENTATION) Write your Algorithm + OPTIONAL ADD FUNCTIONALITY FOR DOG MUTTS

    In [36]:
    ### TODO: Write your algorithm.
    ### Feel free to use as many code cells as needed.
    
    import os
    import matplotlib.pyplot as plt
    from scipy.misc import imread
    
    # img_path: Image
    # human_detector_chosen: select the human_detector function that you want to use: face_detector, face_detector_CNN
    # dog_breed_predictor_chosen: select the dog breed_predictor that you want to use: Xception_predict_dog_breed / Inception_predict_dog_breed
    
    #
    
    def detector(img_path, human_detector_chosen,  dog_breed_predictor_chosen): 
        print("Hello, let see if I can detect what you are:")
        print("...")
    
        isDog = dog_detector(img_path)
        isHuman = human_detector_chosen(img_path)
        breed = ""
       
        # Dog 
        if isDog and not isHuman:
            # Find breed
            breed = dog_breed_predictor_chosen(img_path)
            print("You're a dog and you're a ", breed)
        
        # Human
        elif isHuman and not isDog:
            # Find breed
            breed = dog_breed_predictor_chosen(img_path)
            print("You're a human, but your face look like a ", breed)
        
        else:
            print("There is an error, I can't say that you're a human or dog... so who are you?!")
    

    Step 7: Test Your Algorithm

    In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?

    (IMPLEMENTATION) Test Your Algorithm on Sample Images!

    Test your algorithm at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.

    Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.

    Answer:

    The data

    • Munchin Cat
    • Golden Retriever
    • German Shepeard
    • French President Emmanuel Macron
    • Book "Le fantome de l'opéra"
    • Louise
    • Me

    The results

    Testing with Xception + face_detector:

    • Munchin Cat: correct detection
    • Golden Retriever: correct detection / correct breed
    • German Shepeard: correct detection / incorrect breed
    • French President Emmanuel Macron : correct detection
    • Book "Le fantome de l'opéra": incorrect detection
    • Louise : correct detection
    • Me: correct detection
    For this test: all dogs were detected / An incorrect breed / All humans detected / An object detected as human

    Testing with Xception + face_detector_CNN:

    • Munchin Cat: correct detection
    • Golden Retriever: correct detection / correct breed
    • German Shepeard: correct detection / incorrect breed
    • French President Emmanuel Macron : correct detection
    • Book "Le fantome de l'opéra": correct detection
    • Louise : incorrect detection
    • Me: correct detection
    For this test: all dogs were detected / A human was not detected Thing to notice: this face_detector_CNN recognizes the fact that the book was not a human face.

    Testing with Inception + face_detector:

    • Munchin Cat: correct detection
    • Golden Retriever: correct detection / incorrect breed
    • German Shepeard: correct detection / incorrect breed
    • French President Emmanuel Macron : correct detection
    • Book "Le fantome de l'opéra": incorrect detection
    • Louise : correct detection
    • Me: correct detection
    For this test: all dogs were detected / all dogs breed were false

    Testing with Inception + face_detector_CNN:

    • Munchin Cat: correct detection
    • Golden Retriever: correct detection / incorrect breed
    • German Shepeard: correct detection / incorrect breed
    • French President Emmanuel Macron : correct detection
    • Book "Le fantome de l'opéra": correct detection
    • Louise : incorrect detection
    • Me: correct detection
    For this test: all dogs were detected / all dogs breed were false

    CONCLUSION

    As a conclusion, the best testing result is made by the first algorithm : X_ception + face_detector it's logic because they have the best accuracy (83% for the breed detector and 99% for the face detector).

    To improve this algorithm we can:

    • Add more filters to capture more complex partterns in our data.
    • Add more data and augmented data to improve our model's translation and zooming invariance
    • Add more epochs in the training process and more layers in our X_ception model to have a improve the accuracy.

    The complexity of that is:

    • Keeping a reasonable computation time.
    • Avoid Memory Error --> By using a monitor system like TensorBoard in Tensorflow it can be interesting to compare the results.

    Testing with Xception + face_detector

    In [38]:
    ## TODO: Execute your algorithm from Step 6 on
    ## at least 6 images on your computer.
    ## Feel free to use as many code cells as needed.
    import glob
    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg
    
    
    
    for i in glob.iglob('step6_images/*'):
        print("\n\n")
        print("NEW IMAGE ")
        img = mpimg.imread(i)
        imgplot = plt.imshow(img)
        plt.show()
        detector(i, face_detector, Xception_predict_dog_breed)
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    There is an error, I can't say that you're a human or dog... so who are you?!
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a dog and you're a  Golden_retriever
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a dog and you're a  Belgian_tervuren
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Chihuahua
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Brussels_griffon
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  English_toy_spaniel
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Chihuahua
    
    Out[38]:
    'def display_photo(path):\n    print("File: {}".format(path))\n    species, breed = what_is_it(path)\n    print("Looks like a {}".format(species))\n    print("Could be a {}".format(breed))\n    img=mpimg.imread(path)\n    imgplot = plt.imshow(img)\n    plt.show()\n    \nfor path in glob.iglob(\'my_photos/*\'):\n    display_photo(path)\n\n\npath_to_img = \'\nimages = [im for im in listdir(path_to_img) if isfile(join(path_to_img, im))]\n\n"step6_images/MunchkinCat.jpg", f\ndetector("step6_images/Loulou.jpg", face_detector, Xception_predict_dog_breed)\n\nfor image in images:\n    image_to_test = path_to_img + image\n    print(image_to_test)\n    plt.imshow(imread(image_to_test))\n    plt.show()\n    detector(image_to_test, face_detector, Xception_predict_dog_breed)\n'

    Testing with Xception + face_detector_CNN

    In [39]:
    for i in glob.iglob('step6_images/*'):
        print("\n\n")
        print("NEW IMAGE ")
        img = mpimg.imread(i)
        imgplot = plt.imshow(img)
        plt.show()
        detector(i, face_detector_CNN, Xception_predict_dog_breed)
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    There is an error, I can't say that you're a human or dog... so who are you?!
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a dog and you're a  Golden_retriever
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a dog and you're a  Belgian_tervuren
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Chihuahua
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    There is an error, I can't say that you're a human or dog... so who are you?!
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    There is an error, I can't say that you're a human or dog... so who are you?!
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Chihuahua
    

    Testing with Inception + face_detector

    In [40]:
    for i in glob.iglob('step6_images/*'):
        print("\n\n")
        print("NEW IMAGE ")
        img = mpimg.imread(i)
        imgplot = plt.imshow(img)
        plt.show()
        detector(i, face_detector, Inception_predict_dog_breed)
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    There is an error, I can't say that you're a human or dog... so who are you?!
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.5/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
    You're a dog and you're a  Kuvasz
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a dog and you're a  Belgian_tervuren
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Portuguese_water_dog
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Irish_terrier
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Manchester_terrier
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Canaan_dog
    

    Testing with Inception + face_detector_CNN

    In [41]:
    for i in glob.iglob('step6_images/*'):
        print("\n\n")
        print("NEW IMAGE ")
        img = mpimg.imread(i)
        imgplot = plt.imshow(img)
        plt.show()
        detector(i, face_detector_CNN, Inception_predict_dog_breed)
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    There is an error, I can't say that you're a human or dog... so who are you?!
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a dog and you're a  Kuvasz
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a dog and you're a  Belgian_tervuren
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Portuguese_water_dog
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    There is an error, I can't say that you're a human or dog... so who are you?!
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    There is an error, I can't say that you're a human or dog... so who are you?!
    
    
    
    NEW IMAGE 
    
    Hello, let see if I can detect what you are:
    ...
    You're a human, but your face look like a  Canaan_dog